
    Self-Supervised Learning of Terrain Traversability from Proprioceptive Sensors

    Robust and reliable autonomous navigation in unstructured, off-road terrain is a critical element in making unmanned ground vehicles a reality. Existing approaches tend to evaluate terrain traversability using fixed parameters obtained by testing in specific environments. This results in a system that handles well the terrain it was trained in, but cannot cope with terrain outside its test parameters. An adaptive system does not take the place of training, but supplements it. Whereas training imprints certain environments, an adaptive system would imprint terrain elements and the interactions among them, allowing the vehicle to build a map of local elements using proprioceptive sensors. Such sensors can include velocity, wheel slippage, bumper hits, and accelerometers. Data obtained by these sensors can be compared with observations from ranging sensors such as cameras and LADAR (laser detection and ranging) in order to adapt to any kind of terrain. In this way, the vehicle could sample its surroundings not only to create a map of clear space, but also of what kind of space it is and its composition. With a set of building blocks consisting of terrain features, a vehicle can adapt to terrain it has never seen before, and thus be robust to a changing environment. New observations could be added to its library, enabling it to infer terrain types it was not trained on. This would be very useful in alien environments, where many of the physical features are known but some are not. For example, a seemingly flat, hard plain could actually be soft sand, and the vehicle would sense the sand and avoid it automatically.
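
    The adaptive, self-supervised idea above can be summarized as: label terrain patches by how they actually felt to drive over (proprioception), then train a model that predicts those labels from the ranging and appearance data seen before contact. The following Python sketch illustrates that loop; the feature pipeline, thresholds, and classifier choice are illustrative assumptions, not the system described in the abstract.

    # Minimal sketch of self-supervised traversability labeling (illustrative only).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def proprioceptive_label(wheel_slip, vibration_rms, slip_thresh=0.3, vib_thresh=1.5):
        """Label a traversed patch by what the vehicle felt:
        0 = firm/easy, 1 = slippery or soft, 2 = rough."""
        if wheel_slip > slip_thresh:
            return 1
        if vibration_rms > vib_thresh:
            return 2
        return 0

    # Exteroceptive features (e.g. appearance, range-derived slope/roughness)
    # computed for the same patches before they were driven over (placeholders here).
    X = np.random.rand(200, 6)
    y = np.array([proprioceptive_label(s, v)
                  for s, v in np.random.rand(200, 2) * [1.0, 3.0]])

    # Train an appearance-to-traversability model; at run time it predicts how
    # terrain ahead will "feel" before the vehicle touches it.
    model = RandomForestClassifier(n_estimators=50).fit(X, y)
    predicted = model.predict(np.random.rand(5, 6))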

    Vehicle Detection for RCTA/ANS (Autonomous Navigation System)

    Using a stereo camera pair, imagery is acquired and processed through the JPLV stereo processing pipeline. From this stereo data, large 3D blobs are found. These blobs are then described and classified by their shape to determine which are vehicles and which are not. Prior vehicle detection algorithms are either targeted to specific domains, such as following lead cars, or are intensity-based methods that involve learning typical vehicle appearances from a large corpus of training data. In order to detect vehicles, the JPL Vehicle Detection (JVD) algorithm goes through the following steps:
    1. Take as input a left disparity image and left rectified image from JPLV stereo.
    2. Project the disparity data onto a two-dimensional Cartesian map.
    3. Post-process the map built in the previous step in order to clean it up.
    4. Find peaks in the processed map and grow each peak into a map blob. These map blobs represent large, roughly vehicle-sized objects in the scene.
    5. Reject map blobs that do not meet certain criteria, build descriptors for the ones that remain, and pass these descriptors on to a classifier, which determines whether each blob is a vehicle.
    The probability of detection is the probability that if a vehicle is present in the image, visible, and un-occluded, then it will be detected by the JVD algorithm. To estimate this probability, eight sequences from the RCTA (Robotics Collaborative Technology Alliances) program were ground-truthed, totaling over 4,000 frames with 15 unique vehicles. Since these vehicles were observed at varying ranges, the probability of detection can be found as a function of range. At the time of this reporting, the JVD algorithm was tuned to perform best on cars seen from the front, rear, or either side, and to perform poorly on vehicles seen from oblique angles.
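
    As a rough illustration of steps 2-5, the sketch below projects disparity into a 2D Cartesian map, cleans it, and grows peaks into vehicle-sized blobs. The camera geometry, thresholds, and map resolution are assumed values for illustration, not the tuned JVD parameters.

    # Hedged sketch of the map-based detection flow (steps 2-5 above).
    import numpy as np
    from scipy import ndimage

    def disparity_to_map(disparity, focal_px, baseline_m, cell_m=0.25, map_size=200):
        """Step 2: project stereo disparity into a 2D Cartesian occupancy map."""
        occupancy = np.zeros((map_size, map_size))
        v, u = np.nonzero(disparity > 0)
        z = focal_px * baseline_m / disparity[v, u]        # depth from disparity
        x = (u - disparity.shape[1] / 2) * z / focal_px    # lateral offset
        ix = np.clip((x / cell_m + map_size / 2).astype(int), 0, map_size - 1)
        iz = np.clip((z / cell_m).astype(int), 0, map_size - 1)
        np.add.at(occupancy, (iz, ix), 1.0)                # accumulate hits per cell
        return occupancy

    def find_vehicle_candidates(occupancy, peak_thresh=20, min_cells=8):
        """Steps 3-5: clean the map, grow peaks into blobs, keep vehicle-sized ones."""
        smoothed = ndimage.gaussian_filter(occupancy, sigma=1.0)
        blobs, n = ndimage.label(smoothed > peak_thresh)
        candidates = []
        for label_id in range(1, n + 1):
            cells = np.argwhere(blobs == label_id)
            if len(cells) >= min_cells:       # reject undersized blobs; descriptor
                candidates.append(cells)      # building and classification would follow
        return candidates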

    A novel tiered sensor fusion approach for terrain characterization and safe landing assessment

    ©2006 IEEE. Presented at the 2006 IEEE Aerospace Conference, March 5-11, 2006, Big Sky, MT. DOI: 10.1109/AERO.2006.1655795. This paper presents a novel, tiered sensor fusion methodology for real-time terrain safety assessment. A combination of active and passive sensors, specifically radar, lidar, and camera, operate in three tiers according to their inherent ranges of operation. Low-level terrain features (e.g. slope, roughness) and high-level terrain features (e.g. hills, craters) are integrated using principles of reasoning under uncertainty. Three methodologies are used to infer landing safety: fuzzy reasoning, probabilistic reasoning, and evidential reasoning. The safe landing predictions from the three fusion engines are consolidated in a subsequent decision fusion stage aimed at combining the strengths of each fusion methodology. Results from simulated spacecraft descents are presented and discussed.
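
    The final decision-fusion stage can be pictured as consolidating three per-site safety estimates into one decision. The sketch below shows a simple weighted consolidation; the weights, threshold, and function names are assumptions, since the paper combines fuzzy, probabilistic, and evidential outputs with its own decision-fusion scheme.

    # Illustrative sketch of consolidating three safety estimates (not the paper's method).
    import numpy as np

    def fuse_landing_safety(fuzzy_score, prob_score, evidential_score,
                            weights=(0.3, 0.4, 0.3), safe_thresh=0.6):
        """Each score is a per-site safety estimate in [0, 1]; returns the fused
        score and a boolean safe/unsafe decision per candidate landing site."""
        scores = np.stack([fuzzy_score, prob_score, evidential_score])
        fused = np.tensordot(np.asarray(weights), scores, axes=1)
        return fused, fused >= safe_thresh

    # Example: three candidate sites scored by the three fusion engines.
    fuzzy = np.array([0.9, 0.4, 0.7])
    prob = np.array([0.8, 0.5, 0.6])
    evid = np.array([0.95, 0.3, 0.65])
    fused, is_safe = fuse_landing_safety(fuzzy, prob, evid)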

    2D/3D Visual Tracker for Rover Mast

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation result indicates the appropriate match. The program could be a core for building application programs for systems that require coordination of vision and robotic motion.
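
    The geometric core of the re-pointing step can be sketched as follows: apply the visual-odometry pose change to the stored 3D target point, then compute the pan and tilt that re-center it. The frame conventions and the simplified mast model below are assumptions; the actual system uses a full kinematic model of the articulated mast.

    # Minimal sketch of re-pointing the mast after a visual-odometry pose update.
    import numpy as np

    def update_target_in_rover_frame(target_xyz, delta_R, delta_t):
        """Re-express the tracked 3D target after the rover rotates by delta_R
        and translates by delta_t (both estimated by visual odometry)."""
        return delta_R.T @ (np.asarray(target_xyz) - np.asarray(delta_t))

    def mast_pan_tilt(target_xyz_mast):
        """Pan/tilt angles (radians) that center the target, assuming x forward,
        y left, z up in the mast frame (a simplifying assumption)."""
        x, y, z = target_xyz_mast
        pan = np.arctan2(y, x)
        tilt = np.arctan2(z, np.hypot(x, y))
        return pan, tilt

    # Example: target 5 m ahead; rover drove 1 m forward and yawed 10 degrees.
    yaw = np.deg2rad(10.0)
    delta_R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                        [np.sin(yaw),  np.cos(yaw), 0.0],
                        [0.0, 0.0, 1.0]])
    target = update_target_in_rover_frame([5.0, 0.0, 0.5], delta_R, [1.0, 0.0, 0.0])
    pan, tilt = mast_pan_tilt(target)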

    Supervised Remote Robot with Guided Autonomy and Teleoperation (SURROGATE): A Framework for Whole-Body Manipulation

    The use of the cognitive capabilities of humans to help guide the autonomy of robotics platforms in what is typically called “supervised autonomy” is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a “Supervised Remote Robot with Guided Autonomy and Teleoperation” (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of “behaviors” to chain together sequences of “actions” for the robot to perform, which are then executed in real time.
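
    The "behaviors chain actions" idea can be sketched as a sequence of actions executed in order until one fails. The class and action names below are invented for illustration and are not the SURROGATE API.

    # Conceptual sketch of chaining actions into a behavior (illustrative only).
    from typing import Callable, List

    class Action:
        def __init__(self, name: str, execute: Callable[[], bool]):
            self.name = name
            self.execute = execute        # returns True on success

    class Behavior:
        """A behavior is an ordered sequence of actions executed until one fails."""
        def __init__(self, name: str, actions: List[Action]):
            self.name = name
            self.actions = actions

        def run(self) -> bool:
            for action in self.actions:
                print(f"[{self.name}] executing {action.name}")
                if not action.execute():
                    print(f"[{self.name}] {action.name} failed; aborting behavior")
                    return False
            return True

    # Example: a supervisory intent expanded into a short action sequence.
    open_door = Behavior("open_door", [
        Action("locate_handle", lambda: True),
        Action("reach_with_arm", lambda: True),
        Action("grasp_and_turn", lambda: True),
    ])
    open_door.run()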

    Model-based autonomous system for performing dexterous, human-level manipulation tasks

    This article presents a model-based approach to autonomous dexterous manipulation, developed as part of the DARPA Autonomous Robotic Manipulation Software (ARM-S) program. Performing human-level manipulation tasks is achieved through a novel combination of perception in uncertain environments, precise tool use, forceful dual-arm planning and control, persistent environmental tracking, and task-level verification. Deliberate interaction with the environment is incorporated into planning and control strategies, which, when coupled with world estimation, allows for refinement of models and precise manipulation. The system takes advantage of sensory feedback immediately, with little open-loop execution, attempting true autonomous reasoning and multi-step sequencing that adapts in the face of changing and uncertain environments. A tire-change scenario utilizing human tools, discussed throughout the article, is used to describe the system approach. A second scenario of cutting a wire is also presented, and is used to illustrate system component reuse and generality. United States. Defense Advanced Research Projects Agency. Autonomous Robotic Manipulation Program.
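
    One way to picture the multi-step sequencing with task-level verification described above is an execute-then-verify loop with bounded retries. The step names and retry policy below are illustrative assumptions, not the ARM-S implementation.

    # Rough sketch of multi-step sequencing with task-level verification.
    from typing import Callable, List, Tuple

    def run_task(steps: List[Tuple[str, Callable[[], None], Callable[[], bool]]],
                 max_retries: int = 2) -> bool:
        """Each step is (name, execute, verify). Verification uses sensing after
        execution; a failed check triggers a bounded retry before aborting."""
        for name, execute, verify in steps:
            for attempt in range(max_retries + 1):
                execute()
                if verify():              # task-level check, e.g. nut is off the bolt
                    break
                print(f"step '{name}' failed verification (attempt {attempt + 1})")
            else:
                return False              # retries exhausted for this step
        return True

    # Placeholder skeleton for the tire-change scenario (not real drivers).
    steps = [
        ("grasp_impact_wrench", lambda: None, lambda: True),
        ("remove_lug_nut",      lambda: None, lambda: True),
        ("pull_wheel_off_hub",  lambda: None, lambda: True),
    ]
    run_task(steps)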

    Visual target tracking for rover-based planetary exploration

    Get PDF
    To command a rover to go to a location of scientific interest on a remote planet, the rover must be capable of reliably tracking the target designated by a scientist from about ten rover lengths away. The rover must maintain lock on the target while traversing rough terrain and avoiding obstacles, without the need for communication with Earth. Among the challenges of tracking targets from a rover are the large changes in the appearance and shape of the selected target as the rover approaches it, the limited frame rate at which images can be acquired and processed, and the sudden changes in camera pointing as the rover goes over rocky terrain. We have investigated various techniques for combining 2D and 3D information in order to increase the reliability of visually tracking targets under Mars-like conditions. We present the approaches that we have examined on simulated data and tested onboard the Rocky 8 rover in the JPL Mars Yard and the K9 rover in the ARC Marscape. These techniques include results for 2D trackers, ICP, visual odometry, and 2D/3D trackers.
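
    Of the 3D techniques listed above, ICP is the most self-contained to sketch: iteratively match each source point to its nearest target point and solve for the rigid transform. The code below is a generic textbook point-to-point ICP, not the research code evaluated in the paper.

    # Minimal point-to-point ICP sketch (generic, for illustration).
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iterations=20):
        """Align source (N,3) points to target (M,3); returns rotation R and translation t."""
        R, t = np.eye(3), np.zeros(3)
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iterations):
            _, idx = tree.query(src)                  # nearest-neighbor correspondences
            matched = target[idx]
            src_c, tgt_c = src.mean(0), matched.mean(0)
            H = (src - src_c).T @ (matched - tgt_c)   # cross-covariance (Kabsch)
            U, _, Vt = np.linalg.svd(H)
            R_step = Vt.T @ U.T
            if np.linalg.det(R_step) < 0:             # fix improper rotation (reflection)
                Vt[-1] *= -1
                R_step = Vt.T @ U.T
            t_step = tgt_c - R_step @ src_c
            src = src @ R_step.T + t_step             # apply this iteration's transform
            R, t = R_step @ R, R_step @ t + t_step    # accumulate the total transform
        return R, t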

    Design and development of a high-performance, low-cost robotics platform for research and education

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (leaf 69). By Max Bajracharya. M.Eng.